Although recent deep learning methods, especially generative models, have shown good performance in fast magnetic resonance imaging, there is still considerable room for improvement in high-dimensional generation. Considering that the internal dimensions of score-based generative models have a critical impact on estimating the gradient of the data distribution, we present a new idea, the low-rank tensor assisted k-space generative model (LR-KGM), for parallel imaging reconstruction. In essence, we transform the original prior information into high-dimensional prior information for learning. More specifically, the multi-channel data are assembled into a large Hankel matrix, which is subsequently folded into a tensor for prior learning. In the testing phase, a low-rank rotation strategy is utilized to impose low-rank constraints on the tensor output of the generative network. Furthermore, we alternate between traditional generative iterations and low-rank high-dimensional tensor iterations for reconstruction. Experimental comparisons with state-of-the-art methods demonstrate that the proposed LR-KGM achieves better performance.
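A minimal sketch of the two k-space operations the abstract describes: assembling multi-channel k-space windows into a block-Hankel matrix and imposing a low-rank constraint by singular-value truncation. The window size, truncation rank, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hankel_from_kspace(kspace, win=8):
    """Build a block-Hankel matrix from multi-channel k-space.

    kspace : complex array of shape (coils, ny, nx)
    win    : sliding-window size along each k-space axis (assumed value)
    Returns a 2D matrix whose rows are vectorised sliding windows,
    with all coil windows concatenated along the columns.
    """
    coils, ny, nx = kspace.shape
    rows = []
    for iy in range(ny - win + 1):
        for ix in range(nx - win + 1):
            patch = kspace[:, iy:iy + win, ix:ix + win]   # (coils, win, win)
            rows.append(patch.reshape(-1))                # vectorise all coils
    return np.stack(rows, axis=0)                         # (n_windows, coils*win*win)

def lowrank_project(matrix, rank):
    """Impose a low-rank constraint by hard-thresholding singular values."""
    u, s, vh = np.linalg.svd(matrix, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vh
```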
Deep learning (DL)-based tomographic SAR imaging algorithms are gradually being studied. Typically, they use an unfolding network to mimic the iterative calculation of classical compressive sensing (CS)-based methods and process each range-azimuth unit individually. However, only one-dimensional features are effectively utilized in this way, and the correlation between adjacent resolution units is simply ignored. To address this, we propose a new model-data-driven network that achieves tomoSAR imaging based on multi-dimensional features. Guided by the deep unfolding methodology, a two-dimensional deep unfolding imaging network is constructed. On this basis, we add two 2D processing modules, both convolutional encoder-decoder structures, to effectively enhance the multi-dimensional features of the imaging scene. Meanwhile, to train the proposed multi-feature-based imaging network, we construct a tomoSAR simulation dataset consisting entirely of simulated building data. Experiments verify the effectiveness of the model. Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better completeness while maintaining decent imaging accuracy.
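As a rough illustration of the kind of 2D processing module the abstract describes, the sketch below is a small convolutional encoder-decoder with a residual connection that could refine an unfolding iterate; the layer widths and the residual design are assumptions, not the paper's architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeatureEnhance2D(nn.Module):
    """Hypothetical 2D convolutional encoder-decoder used to enhance
    multi-dimensional features between unfolding iterations."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        out = self.decoder(self.encoder(x))
        # match the input size exactly before the residual connection
        out = F.interpolate(out, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return x + out
```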
Benefiting from a relatively large aperture angle combined with a wide transmit bandwidth, near-field synthetic aperture radar (SAR) provides a high-resolution image of a target's scattering distribution (hot spots). Meanwhile, the imaging result suffers inevitable degradation from sidelobes, clutter, and noise, hindering the retrieval of target information. To restore the image, current methods make simplifying assumptions; for example, that the point spread function (PSF) is spatially invariant, that the target consists of sparse point scatterers, etc. Thus, they achieve limited restoration performance in terms of the target's shape, especially for complex targets. To address these issues, this work conducts a preliminary study on restoration with recent promising deep learning inverse techniques. We reformulate the degradation model as a spatially variable complex-convolution model in which the near-field SAR system response is taken into account. Adhering to it, a model-based deep learning network is designed to restore the image. A simulated degraded-image dataset built from multiple complex target models is constructed to validate the network; all images are generated with an electromagnetic simulation tool. Experiments on the dataset reveal the method's effectiveness: compared with current methods, superior performance is achieved in terms of the target's shape and energy estimation.
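A crude sketch of a spatially variable complex-convolution degradation model, approximated piecewise: each region of the scene is convolved with its own complex PSF. The region partitioning and the function signature are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatially_variant_degrade(scene, psf_bank, region_map):
    """Piecewise approximation of a spatially variable complex convolution.

    scene      : complex 2D array, the ideal scattering distribution
    psf_bank   : dict {region_id: complex 2D PSF for that region}
    region_map : int 2D array, same shape as scene, region id per pixel
    """
    degraded = np.zeros_like(scene, dtype=complex)
    for rid, psf in psf_bank.items():
        mask = (region_map == rid)
        # convolve the masked contribution with its local system response
        degraded += fftconvolve(scene * mask, psf, mode="same")
    return degraded
```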
This work focuses on 3D radar imaging inverse problems. Current methods obtain undifferentiated results that suffer task-dependent information retrieval loss and thus do not meet specific task demands well. For example, biased scattering energy may be acceptable for screening imaging but not for scattering diagnosis. To address this issue, we propose a new task-oriented imaging framework. The imaging principle is made task-oriented through an analysis phase that obtains the task's demands. The imaging model is multi-cognition regularized to embed and fulfill these demands. The imaging method is designed to be generalized: the couplings between cognitions are decoupled and solved individually with approximation and variable-splitting techniques. Scattering diagnosis, person screening imaging, and parcel screening imaging are given as example tasks. Experiments on data from two systems indicate that the proposed framework outperforms current ones in task-dependent information retrieval.
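The decoupling idea can be sketched as a simple proximal-splitting loop in which each task-derived ("cognition") regularizer is handled by its own proximal operator; the scheme, step size, and names below are illustrative assumptions and not the paper's exact algorithm.

```python
def multi_regularized_imaging(y, forward, adjoint, proxes, weights,
                              step=0.5, n_iter=50):
    """Sketch of minimizing 0.5 * ||A x - y||^2 + sum_k w_k * R_k(x)
    by a gradient step on the data term followed by one proximal step
    per regularizer (all callables are assumed to be supplied).

    y       : measured data
    forward : callable computing A x
    adjoint : callable computing A^H r
    proxes  : list of proximal operators prox_k(v, t)
    weights : list of regularization weights w_k
    """
    x = adjoint(y)
    for _ in range(n_iter):
        # data-consistency gradient step
        x = x - step * adjoint(forward(x) - y)
        # each cognition/regularizer is decoupled into its own proximal step
        for prox, w in zip(proxes, weights):
            x = prox(x, step * w)
    return x
```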
The moving-target shadows in video synthetic aperture radar (Video-SAR) images are always disturbed by the low-scattering background and cluttered noise, leading to poor performance in moving-target shadow detection and tracking. To solve this problem, this letter proposes a shadow-background 3D spatial separation method, named SBN-3D-SD, to enhance shadow saliency and thereby improve the moving-target shadow detection and tracking performance of Video-SAR.
With the flourishing of convolutional neural networks (CNNs), CNNs such as VGG-16 and ResNet-50 are widely used as backbones in SAR ship detection. However, CNN-based backbones struggle to model long-range dependencies, and the shallow feature maps consequently lack sufficient high-quality semantic information, resulting in poor detection performance for small ships in complex backgrounds. To address these problems, we propose a SAR ship detection method based on the Swin Transformer together with a feature-enhancement feature pyramid network (FEFPN). The Swin Transformer serves as the backbone to model long-range dependencies and generate hierarchical feature maps. FEFPN is proposed to further improve the quality of the feature maps by gradually enhancing the semantic information of feature maps at all levels, especially those in the shallow layers. Experiments on the SAR ship detection dataset (SSDD) reveal the advantages of the proposed method.
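A rough sketch of a feature-enhancement FPN neck of the kind described: a standard top-down pathway whose per-level outputs receive an extra enhancement convolution so deeper semantics strengthen the shallow maps. The input channel sizes follow Swin-T stages; the enhancement design itself is an assumption and may differ from the paper's FEFPN.

```python
import torch.nn as nn
import torch.nn.functional as F

class SimpleFEFPN(nn.Module):
    """Illustrative feature-enhancement FPN: lateral 1x1 convs, top-down
    fusion, then a 3x3 enhancement conv per level."""
    def __init__(self, in_channels=(96, 192, 384, 768), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.enhance = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels)

    def forward(self, feats):                       # feats: shallow -> deep
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 1, 0, -1):   # top-down semantic fusion
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        return [e(x) for e, x in zip(self.enhance, laterals)]
```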
Accurate segmentation and motion estimation of the myocardium have always been important in clinical practice, as they fundamentally assist downstream diagnosis. However, existing methods cannot always guarantee the shape integrity of myocardial segmentation. In addition, motion estimation requires point correspondences of the myocardial region across different frames. In this paper, we propose a novel end-to-end deep statistical shape model that focuses on myocardial segmentation with shape integrity and boundary correspondence. Specifically, the myocardial shape is represented by a fixed number of points, and its variations are extracted by principal component analysis (PCA). A deep neural network is used to predict the transformation parameters (affine and deformation), which are then used to warp the mean point cloud to the image domain. Furthermore, a differentiable rendering layer is introduced to incorporate mask supervision into the framework to learn more accurate point clouds. In this way, the proposed method is able to consistently produce anatomically plausible segmentation masks without post-processing. In addition, the predicted point clouds also guarantee boundary correspondence across sequential images, which benefits downstream tasks such as myocardial motion estimation. We conduct several experiments to demonstrate the effectiveness of the proposed method on several benchmark datasets.
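The PCA shape model and the transformation the network would predict can be sketched as follows; the number of modes and the affine parameterization are illustrative assumptions.

```python
import numpy as np

def build_pca_shape_model(shapes, n_modes=10):
    """Build a point-distribution model from aligned training shapes.

    shapes : array (n_samples, n_points, 2) of corresponding boundary points
    Returns the mean shape and the first `n_modes` PCA modes of variation.
    """
    flat = shapes.reshape(len(shapes), -1)
    mean = flat.mean(axis=0)
    u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean.reshape(-1, 2), vt[:n_modes].reshape(n_modes, -1, 2)

def reconstruct_shape(mean, modes, coeffs, affine):
    """Deform the mean shape with PCA coefficients, then apply a 2D affine
    transform (the kind of parameters a network would be trained to predict)."""
    pts = mean + np.tensordot(coeffs, modes, axes=1)   # (n_points, 2)
    A, t = affine                                      # A: (2, 2), t: (2,)
    return pts @ A.T + t
```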
Deep learning methods have achieved impressive performance for multi-class medical image segmentation. However, their ability to encode topological interactions among different classes (e.g., containment and exclusion) is limited. These constraints naturally arise in biomedical images and are crucial for improving segmentation quality. In this paper, we introduce a novel topological interaction module that encodes topological interactions into deep neural networks. The implementation is entirely convolution-based and therefore highly efficient. This empowers us to incorporate the constraints into end-to-end training and enrich the feature representation of the network. The efficacy of the method is validated on different types of interactions. We also demonstrate the generalizability of the method on proprietary and public challenge datasets, in both 2D and 3D settings, as well as across different modalities such as CT and ultrasound. Code is available at: https://github.com/topoxlab/topointeraction
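One convolution-only way to express a topological exclusion interaction, sketched below: dilate each class mask with a convolution and flag pixels where the dilations overlap, yielding a violation map that could weight a loss. This is an illustration of the idea, not necessarily the module released in the repository above.

```python
import torch
import torch.nn.functional as F

def exclusion_violation_map(prob_a, prob_b, kernel_size=3):
    """Flag pixels violating an exclusion constraint between two classes.

    prob_a, prob_b : tensors (N, 1, H, W) of per-class probabilities
    Returns a (N, 1, H, W) map that is 1 where the two dilated masks meet.
    """
    kernel = torch.ones(1, 1, kernel_size, kernel_size, device=prob_a.device)
    pad = kernel_size // 2
    mask_a = (prob_a > 0.5).float()
    mask_b = (prob_b > 0.5).float()
    dil_a = (F.conv2d(mask_a, kernel, padding=pad) > 0).float()  # dilation via conv
    dil_b = (F.conv2d(mask_b, kernel, padding=pad) > 0).float()
    return dil_a * dil_b
```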
How to fully exploit polarization to enhance synthetic aperture radar (SAR) ship classification remains an open problem. Therefore, we propose a dual-polarization information guided network (DPIG-Net) to solve it.
Most existing synthetic aperture radar (SAR) ship instance segmentation models do not achieve mask interaction or offer only limited interaction performance. Moreover, their multi-scale ship instance segmentation performance is moderate, especially for small ships. To solve these problems, we propose a mask attention interaction and scale enhancement network (MAI-SE-Net) for SAR ship instance segmentation. MAI uses atrous spatial pyramid pooling (ASPP) to obtain multi-resolution feature responses, a non-local block (NLB) to model long-range spatial dependencies, and a concatenation shuffle attention block (CSAB) to improve the interaction benefit. SE uses a content-aware reassembly of features block (CARAFEB) to generate an extra pyramid bottom level to boost small-ship performance, a feature balance operation (FBO) to improve the scale feature description, and a global context block (GCB) to refine features. Experimental results on the two public datasets SSDD and HRSID reveal that MAI-SE-Net outperforms nine other competitive models, surpassing the second-best model by 4.7% detection AP and 3.4% segmentation AP on SSDD, and by 3.0% detection AP and 2.4% segmentation AP on HRSID.
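As an example of one named ingredient, a minimal atrous spatial pyramid pooling (ASPP) block is sketched below; the channel widths and dilation rates are illustrative assumptions, and the other blocks (NLB, CSAB, CARAFEB, FBO, GCB) are not reproduced here.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal ASPP block: parallel dilated convolutions capture
    multi-resolution context, then a 1x1 convolution fuses the responses."""
    def __init__(self, in_channels=256, out_channels=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, 3, padding=r, dilation=r)
            for r in rates)
        self.fuse = nn.Conv2d(out_channels * len(rates), out_channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```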